131 research outputs found

    Now for sure or later with a risk? Modeling risky intertemporal choice as accumulated preference

    Research on risky and intertemporal decision-making often focuses on descriptive models of choice. This class of models sometimes lacks a psychological process account of how different cognitive processes give rise to choice behavior. Here, we attempt to decompose these processes using sequential accumulator modeling (i.e., the linear ballistic accumulator model). Participants were presented with pairs of gambles that involved different levels of either probability or delay (Experiment 1) or a combination of these dimensions (both probability and delay; Experiment 2). Response times were recorded as a measure of preferential strength. We then combined choice data and response times, and utilized variants of the linear ballistic accumulator to explore different assumptions about how preferences are formed. Specifically, we show that a model that allows the subjective evaluation of a fixed now/certain option to change as a function of the delayed/risky option with which it is paired provides the best account of the combined choice and response time (RT) data. The work highlights the advantages of using cognitive process models in risky and intertemporal choice, and points toward a common framework for understanding how people evaluate time and probability.
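    The racing architecture behind the linear ballistic accumulator can be illustrated with a minimal simulation. The sketch below is in Python; all parameter values (threshold, start-point range, drift noise, non-decision time) are chosen purely for illustration and are not taken from the paper:

    ```python
    import random

    def lba_trial(drifts, b=1.0, A=0.5, s=0.3, t0=0.2, rng=random):
        """Simulate one linear ballistic accumulator (LBA) trial.

        Each response option races ballistically from a uniform start
        point in [0, A] toward threshold b, with a drift rate drawn from
        a normal distribution around its mean drift. The first accumulator
        to reach threshold determines the choice; its finishing time plus
        non-decision time t0 is the response time.
        """
        times = []
        for v in drifts:
            d = rng.gauss(v, s)
            while d <= 0:                      # resample non-positive drifts
                d = rng.gauss(v, s)
            k = rng.uniform(0, A)              # random start point
            times.append((b - k) / d)          # time to reach threshold
        choice = min(range(len(times)), key=times.__getitem__)
        return choice, t0 + times[choice]

    # A stronger preference (higher mean drift) should win more often and faster.
    random.seed(1)
    trials = [lba_trial([1.2, 0.6]) for _ in range(2000)]
    p_first = sum(c == 0 for c, _ in trials) / len(trials)
    ```

    In a model-fitting context these parameters would be estimated from the joint choice/RT distributions rather than fixed by hand, which is how the variants compared in the paper differ.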

    Weekly reports for R.V. Polarstern expedition PS103 (2016-12-16 - 2017-02-03, Cape Town - Punta Arenas), German and English version

    Priming is arguably one of the key phenomena in contemporary social psychology. Recent retractions and failed replication attempts have led to a division in the field between proponents and skeptics and have reinforced the importance of confirming certain priming effects through replication. In this study, we describe the results of 2 preregistered replication attempts of 1 experiment by Förster and Denzler (2012). In both experiments, participants first processed letters either globally or locally, then were tested using a typicality rating task. Bayes factor hypothesis tests were conducted for both experiments: Experiment 1 (N = 100) yielded an indecisive Bayes factor of 1.38, indicating that the in-lab data are 1.38 times more likely to have occurred under the null hypothesis than under the alternative. Experiment 2 (N = 908) yielded a Bayes factor of 10.84, indicating strong support for the null hypothesis that global priming does not affect participants' mean typicality ratings. The failure to replicate this priming effect challenges existing support for the GLOMOsys model.
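    A Bayes factor quantifies how much more likely the data are under one hypothesis than the other; under equal prior odds it converts directly into a posterior probability for the null. A small sketch of that arithmetic, applied to the two values reported above:

    ```python
    def posterior_null_prob(bf01, prior_null=0.5):
        """Posterior probability of H0 given BF01 (evidence for the null
        over the alternative) and a prior probability for H0, via Bayes'
        rule on the odds scale."""
        prior_odds = prior_null / (1 - prior_null)
        post_odds = bf01 * prior_odds          # posterior odds = BF * prior odds
        return post_odds / (1 + post_odds)

    # The two Bayes factors reported above, assuming 50/50 prior odds:
    p1 = posterior_null_prob(1.38)    # Experiment 1: ~0.58, indecisive
    p2 = posterior_null_prob(10.84)   # Experiment 2: ~0.92, strong support for H0
    ```

    The 50/50 prior odds are an illustrative assumption; a reader more skeptical of the original effect would start with higher prior odds for the null and end even more confident in it.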

    Perspectives on scientific error

    Theoretical arguments and empirical investigations indicate that a high proportion of published findings do not replicate and are likely false. The current position paper provides a broad perspective on scientific error, which may lead to replication failures. This broad perspective focuses on reform history and on opportunities for future reform. We organize our perspective along four main themes: institutional reform, methodological reform, statistical reform, and publishing reform. For each theme, we illustrate potential errors by narrating the story of a fictional researcher during the research cycle, and we discuss future opportunities for reform. The resulting agenda provides a resource to usher in an era marked by a research culture that is less error-prone and a scientific publication landscape with fewer spurious findings.

    Recognising and reacting to angry and happy facial expressions: a diffusion model analysis.

    Researchers have reported two biases in how people recognise and respond to angry and happy facial expressions: (1) a gender-expression bias (Becker et al. in J Pers Soc Psychol 92(2):179-190, https://doi.org/10.1037/0022-3514.92.2.179, 2007), i.e., faster identification of male faces as angry and female faces as happy, and (2) an approach-avoidance bias, i.e., faster avoidance responses to people who appear angry and faster approach responses to people who appear happy (Heuer et al. in Behav Res Ther 45(12):2990-3001, https://doi.org/10.1016/j.brat.2007.08.010, 2007; Marsh et al. in Emotion 5(1):119-124, https://doi.org/10.1037/1528-3542.5.1.119, 2005; Rotteveel and Phaf in Emotion 4(2):156-172, https://doi.org/10.1037/1528-3542.4.2.156, 2004). The aim of the current research is to gain insight into the nature of such biases by applying the drift diffusion model to the results of an approach-avoidance task. Sixty-five participants (33 female) identified faces as either happy or angry by pushing and pulling a joystick. In agreement with the original study of this effect (Solarz 1960), there were clear participant gender differences: both the approach-avoidance and gender-expression biases were larger in magnitude for female compared to male participants. The diffusion model results extend recent research (Krypotos et al. in Cogn Emot 29(8):1424-1444, https://doi.org/10.1080/02699931.2014.985635, 2015) by indicating that the gender-expression and approach-avoidance biases are mediated by separate cognitive processes.
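    A drift diffusion analysis decomposes choice and RT data into a drift rate (evidence quality), boundary separation (response caution), and non-decision time (stimulus encoding and motor execution, e.g., the joystick movement). A minimal Python sketch of the generative process, with illustrative parameter values not taken from the paper:

    ```python
    import random

    def ddm_trial(v, a=1.0, z=0.5, t0=0.3, dt=0.001, s=1.0, rng=random):
        """Simulate one drift diffusion trial via an Euler random walk.

        Evidence x starts at z*a (relative start point z) and drifts at
        rate v with Gaussian noise of scale s until it crosses 0 or a.
        Returns (response, RT): response 1 for the upper boundary,
        0 for the lower; RT includes non-decision time t0.
        """
        x = z * a
        t = 0.0
        step_sd = s * dt ** 0.5
        while 0.0 < x < a:
            x += v * dt + rng.gauss(0.0, step_sd)
            t += dt
        return (1 if x >= a else 0), t0 + t

    # A positive drift pushes evidence toward the upper boundary, so
    # upper-boundary responses dominate and arrive comparatively fast.
    random.seed(2)
    trials = [ddm_trial(v=1.5) for _ in range(500)]
    p_upper = sum(r for r, _ in trials) / len(trials)
    ```

    In an analysis like the one above, these parameters would be fitted per condition; the reported dissociation amounts to the two biases loading on different fitted parameters rather than on a single one.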

    Mathematically aggregating experts' predictions of possible futures

    Structured protocols offer a transparent and systematic way to elicit and aggregate probabilistic predictions from multiple experts. These judgements can be aggregated behaviourally or mathematically to derive a final group prediction. Mathematical rules (e.g., weighted linear combinations of judgements) provide an objective approach to aggregation. The quality of this aggregation can be defined in terms of accuracy, calibration and informativeness. These measures can be used to compare different aggregation approaches and help decide which aggregation produces the “best” final prediction. When experts’ performance can be scored on similar questions ahead of time, these scores can be translated into performance-based weights, and a performance-based weighted aggregation can then be used. When this is not possible, several other aggregation methods, informed by measurable proxies for good performance, can be formulated and compared. Here, we develop a suite of aggregation methods, informed by previous experience and the available literature. We differentially weight our experts’ estimates by measures of reasoning, engagement, openness to changing their mind, informativeness, prior knowledge, and extremity, asymmetry or granularity of estimates. Next, we investigate the relative performance of these aggregation methods using three datasets. The main goal of this research is to explore how measures of knowledge and behaviour of individuals can be leveraged to produce a better performing combined group judgement. Although the accuracy, calibration, and informativeness of the majority of methods are very similar, a couple of the aggregation methods consistently distinguish themselves as among the best or worst. Moreover, the majority of methods outperform the usual benchmarks provided by the simple average or the median of estimates.
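    The simplest mathematical rule mentioned above, a weighted linear combination, can be sketched in a few lines. The expert probabilities, weights, and outcome below are hypothetical, and the Brier score stands in for the paper's fuller accuracy/calibration/informativeness assessment:

    ```python
    def linear_pool(estimates, weights=None):
        """Weighted linear combination of experts' probability estimates
        (weights are normalised so they need not sum to one)."""
        if weights is None:
            weights = [1.0] * len(estimates)
        total = sum(weights)
        return sum(w * p for w, p in zip(weights, estimates)) / total

    def brier(prob, outcome):
        """Brier score: squared error of a probability forecast
        against a 0/1 outcome (lower is better)."""
        return (prob - outcome) ** 2

    # Three hypothetical experts judge one event that did occur (outcome = 1);
    # the performance-based weights favour the historically best expert.
    probs = [0.9, 0.6, 0.4]
    weights = [3.0, 1.0, 1.0]
    weighted = linear_pool(probs, weights)     # 0.74
    unweighted = linear_pool(probs)            # ~0.633 (the simple average)
    ```

    On this single hypothetical event the weighted pool scores better than the simple-average benchmark; in the paper, that comparison is run across three datasets and many questions.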

    Bayes Factors for Mixed Models: a Discussion

    van Doorn et al. (2021) outlined various questions that arise when conducting Bayesian model comparison for mixed effects models. Seven response articles offered their own perspective on the preferred setup for mixed model comparison, on the most appropriate specification of prior distributions, and on the desirability of default recommendations. This article presents a round-table discussion that aims to clarify outstanding issues, explore common ground, and outline practical considerations for any researcher wishing to conduct a Bayesian mixed effects model comparison.

    A review of applications of the Bayes factor in psychological research

    The last 25 years have shown a steady increase in attention for the Bayes factor as a tool for hypothesis evaluation and model selection. The present review highlights the potential of the Bayes factor in psychological research. We discuss six types of applications: Bayesian evaluation of point null, interval, and informative hypotheses, Bayesian evidence synthesis, Bayesian variable selection and model averaging, and Bayesian evaluation of cognitive models. We elaborate what each application entails, give illustrative examples, and provide an overview of key references and software with links to other applications. The paper concludes with a discussion of the opportunities and pitfalls of Bayes factor applications and a sketch of corresponding future research lines.
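    For the point-null application, one standard route to the Bayes factor is the Savage-Dickey density ratio: when the null fixes a parameter at a point, BF01 equals the posterior density at that point divided by the prior density there. A self-contained sketch for a normal model with known unit variance and a normal prior on the effect (all numerical values illustrative):

    ```python
    import math

    def normal_pdf(x, mu, var):
        """Density of N(mu, var) at x."""
        return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

    def savage_dickey_bf01(xbar, n, prior_var=1.0):
        """Bayes factor for the point null delta = 0 via the Savage-Dickey
        density ratio, assuming data ~ N(delta, 1) and prior delta ~ N(0,
        prior_var). Conjugacy gives the posterior in closed form, so
        BF01 = posterior density at 0 / prior density at 0."""
        post_var = 1.0 / (n + 1.0 / prior_var)
        post_mean = post_var * n * xbar
        return normal_pdf(0.0, post_mean, post_var) / normal_pdf(0.0, 0.0, prior_var)

    # A sample mean near zero concentrates the posterior at the null value,
    # so BF01 > 1; a clear effect pushes the posterior away, so BF01 < 1.
    bf01_null_like = savage_dickey_bf01(xbar=0.02, n=50)
    bf01_effect = savage_dickey_bf01(xbar=0.8, n=50)
    ```

    The same ratio underlies many of the software implementations the review points to, although real analyses typically use default priors such as a Cauchy on the standardized effect rather than this simple conjugate normal.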